Bayesian Shrinkage Variable Selection
April 25, 2008

Author

  • Russell L. Zaretzki
Abstract

We introduce a new Bayesian approach to the variable selection problem which we term Bayesian Shrinkage Variable Selection (BSVS). This approach is inspired by the Relevance Vector Machine (RVM), which uses a Bayesian hierarchical linear setup to perform variable selection and model estimation. RVM is typically applied in the context of kernel regression, although it is also suitable in the standard regression context. Extending the RVM algorithm, we include a proper prior distribution for the precisions of the regression coefficients, $v_j \sim f(v_j^{-1} \mid \eta)$, where $\eta$ is a scalar hyperparameter. Based upon this model, we derive the full set of conditional distributions for the parameters, as would typically be done when applying Gibbs sampling. However, instead of simulating samples from the joint posterior distribution in order to estimate the posterior means of the parameters, we use the full conditionals in order...
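The abstract is cut off above, but the model it outlines (an RVM-style hierarchical linear regression with a proper prior on the coefficient precisions, estimated by cycling through the full conditional distributions rather than by Gibbs sampling) can be sketched as follows. This is only an illustrative sketch: the conjugate Gamma forms, the hyperparameter names eta, a0 and b0, and the use of conditional means as the update rule are assumptions made for the example, not details taken from the paper.

import numpy as np

def bsvs_sketch(X, y, eta=1e-3, a0=1e-3, b0=1e-3, n_iter=200, tol=1e-8):
    """Illustrative sketch of an RVM-style hierarchical linear model.

    Assumed (conjugate) model, not taken verbatim from the paper:
        y | beta, sigma2   ~ N(X beta, sigma2 * I)
        beta_j | v_j       ~ N(0, v_j)          # v_j = prior variance of beta_j
        1 / v_j            ~ Gamma(eta, eta)    # proper prior on the precisions
        1 / sigma2         ~ Gamma(a0, b0)

    Each full conditional is replaced by its mean and the updates are
    iterated to convergence, instead of drawing Gibbs samples.
    """
    n, p = X.shape
    alpha = np.ones(p)          # current precisions 1/v_j
    noise_prec = 1.0            # current 1/sigma2
    beta = np.zeros(p)
    for _ in range(n_iter):
        # Full conditional of beta is multivariate normal; use its mean.
        A = noise_prec * X.T @ X + np.diag(alpha)
        Sigma = np.linalg.inv(A)
        beta_new = noise_prec * Sigma @ X.T @ y
        # Full conditional of 1/v_j is Gamma(eta + 1/2, eta + beta_j^2 / 2); use its mean.
        alpha = (eta + 0.5) / (eta + 0.5 * beta_new**2)
        # Full conditional of 1/sigma2 is Gamma(a0 + n/2, b0 + SSR/2); use its mean.
        resid = y - X @ beta_new
        noise_prec = (a0 + 0.5 * n) / (b0 + 0.5 * resid @ resid)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta, 1.0 / alpha, 1.0 / noise_prec

Coefficients whose estimated precisions 1/v_j grow very large are shrunk essentially to zero, which is how a hierarchy of this kind performs variable selection.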


Similar articles


Shrinkage Tuning Parameter Selection with a Diverging Number of Parameters

Contemporary statistical research frequently deals with problems involving a diverging number of parameters. For those problems, various shrinkage methods (e.g., LASSO, SCAD, etc.) are found particularly useful for the purpose of variable selection (Fan and Peng, 2004; Huang et al., 2007b). Nevertheless, the desirable performances of those shrinkage methods heavily hinge on an appropriate select...


Decoupling Shrinkage and Selection in Bayesian Linear Models: A Posterior Summary Perspective, by P. Richard Hahn and Carlos M. Carvalho (Booth School of Business and McCombs School of Business)

Selecting a subset of variables for linear models remains an active area of research. This paper reviews many of the recent contributions to the Bayesian model selection and shrinkage prior literature. A posterior variable selection summary is proposed...


Variable Inclusion and Shrinkage Algorithms

The Lasso is a popular and computationally efficient procedure for automatically performing both variable selection and coefficient shrinkage on linear regression models. One limitation of the Lasso is that the same tuning parameter is used for both variable selection and shrinkage. As a result, it typically ends up selecting a model with too many variables in order to prevent over-shrinkage of the regr...
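As a quick illustration of the single-tuning-parameter point, the scikit-learn sketch below (with an invented data-generating setup) shows how one value of alpha simultaneously decides how many coefficients survive and how strongly the survivors are shrunk.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
true_beta = np.zeros(p)
true_beta[:3] = [3.0, -2.0, 1.5]        # only three truly active variables
y = X @ true_beta + rng.standard_normal(n)

for alpha in (0.01, 0.1, 1.0):
    fit = Lasso(alpha=alpha).fit(X, y)
    n_selected = int(np.sum(fit.coef_ != 0))
    # A single alpha controls both selection (how many coefficients are
    # nonzero) and shrinkage (how far the surviving coefficients are
    # pulled toward zero).
    print(f"alpha={alpha}: {n_selected} variables selected, "
          f"largest |coefficient| = {np.abs(fit.coef_).max():.2f}")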


E-Bayesian Approach in A Shrinkage Estimation of Parameter of Inverse Rayleigh Distribution under General Entropy Loss Function

Whenever approximate and initial information about the unknown parameter of a distribution is available, the shrinkage estimation method can be used to estimate it. In this paper, first the $E$-Bayesian estimation of the parameter of the inverse Rayleigh distribution under the general entropy loss function is obtained. Then, the shrinkage estimate of the inverse Rayleigh distribution parameter i...



Journal:

Volume   Issue

Pages  -

Publication date: 2008